3D Acetabular Surface Reconstruction from 2D Pre-operative X-ray Images using SRVF Elastic Registration and Deformation Graph

Zhang, Shuai, Wang, Jinliang, Konan, Sujith, Wang, Xu, Stoyanov, Danail, Mazomenos, Evangelos B.

arXiv.org Artificial Intelligence

Accurate and reliable selection of the appropriate acetabular cup size is crucial for restoring joint biomechanics in total hip arthroplasty (THA). This paper proposes a novel framework that integrates a square-root velocity function (SRVF)-based elastic shape registration technique with an embedded deformation (ED) graph approach to reconstruct the 3D articular surface of the acetabulum by fusing multiple views of 2D pre-operative pelvic X-ray images with a hemispherical surface model. The SRVF-based elastic registration establishes 2D-3D correspondences between the parametric hemispherical model and the X-ray images, and the ED framework incorporates the SRVF-derived correspondences as constraints to optimize the 3D acetabular surface reconstruction using nonlinear least-squares optimization. Validations using both simulation and real patient datasets are performed to demonstrate the robustness and the potential clinical value of the proposed algorithm. The reconstruction result can assist surgeons in selecting the correct acetabular cup on the first attempt in primary THA, minimising the need for revision surgery. Code and data will be released upon acceptance.
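As a rough illustration of the SRVF representation underlying the registration step, the sketch below (plain NumPy, toy curves rather than acetabular contours) computes q(t) = β′(t)/√|β′(t)| and compares two curves by the L2 distance between their q-functions. All curves and names here are invented for illustration:

```python
import numpy as np

def srvf(curve, t):
    """Square-root velocity function q(t) = beta'(t) / sqrt(|beta'(t)|)."""
    deriv = np.gradient(curve, t, axis=0)                 # numerical beta'(t)
    speed = np.maximum(np.linalg.norm(deriv, axis=1), 1e-12)
    return deriv / np.sqrt(speed)[:, None]

# Toy example: two semicircular arcs of different radius
t = np.linspace(0.0, np.pi, 200)
arc1 = np.stack([np.cos(t), np.sin(t)], axis=1)           # radius-1 arc
arc2 = 2.0 * arc1                                         # radius-2 arc
q1, q2 = srvf(arc1, t), srvf(arc2, t)

# Under the SRVF representation, elastic shape comparison reduces to an
# L2 distance between q-functions (here a simple Riemann-sum integral).
dt = t[1] - t[0]
dist = np.sqrt(np.sum((q1 - q2) ** 2) * dt)
```

A full elastic registration additionally optimizes over rotations and reparametrizations of the curves, which this sketch omits; it only shows why the SRVF map makes that comparison an L2 problem.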


Autonomous Vision-Guided Resection of Central Airway Obstruction

Smith, M. E., Yilmaz, N., Watts, T., Scheikl, P. M., Ge, J., Deguet, A., Kuntz, A., Krieger, A.

arXiv.org Artificial Intelligence

Existing tracheal tumor resection methods often lack the precision required for effective airway clearance, and robotic advancements offer new potential for autonomous resection. We present a vision-guided, autonomous approach for palliative resection of tracheal tumors. This system models the tracheal surface with a fifth-degree polynomial to plan tool trajectories, while a custom Faster R-CNN segmentation pipeline identifies the trachea and tumor boundaries. The electrocautery tool angle is optimized using handheld surgical demonstrations, and trajectories are planned to maintain a 1 mm safety clearance from the tracheal surface. We validated the workflow in five consecutive experiments on ex-vivo animal tissue models, successfully clearing the airway obstruction without tracheal perforation in all cases (with more than 90% volumetric tumor removal). These results support the feasibility of an autonomous resection platform, paving the way for future developments in minimally-invasive autonomous resection.
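To make the trajectory-planning idea concrete, here is a hedged 1-D sketch: it fits a fifth-degree polynomial to a made-up wall-depth profile and offsets the tool path by the 1 mm clearance. The profile data and variable names are invented; the actual system models a full 3-D surface and enforces clearance relative to it:

```python
import numpy as np

# Hypothetical 1-D wall-depth profile along the tool's axial path
# (all values invented for illustration).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 40.0, 50)                       # axial position (mm)
z_wall = 8.0 + 0.5 * np.sin(x / 6.0) + rng.normal(0.0, 0.05, x.size)

coeffs = np.polyfit(x, z_wall, deg=5)                # fifth-degree surface model
z_model = np.polyval(coeffs, x)

CLEARANCE_MM = 1.0                                   # required safety margin
z_tool = z_model - CLEARANCE_MM                      # planned electrocautery path
```

A real implementation would offset along the surface normals rather than along a single depth axis; the subtraction here is only the 1-D analogue of that constraint.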


OpenLiDARMap: Zero-Drift Point Cloud Mapping using Map Priors

Kulmer, Dominik, Leitenstern, Maximilian, Weinmann, Marcel, Lienkamp, Markus

arXiv.org Artificial Intelligence

Accurate localization is a critical component of mobile autonomous systems, especially in Global Navigation Satellite Systems (GNSS)-denied environments where traditional methods fail. In such scenarios, environmental sensing is essential for reliable operation. However, approaches such as LiDAR odometry and Simultaneous Localization and Mapping (SLAM) suffer from drift over long distances, especially in the absence of loop closures. Map-based localization offers a robust alternative, but the challenge lies in creating and georeferencing maps without GNSS support. To address this issue, we propose a method for creating georeferenced maps without GNSS by using publicly available data, such as building footprints and surface models derived from sparse aerial scans. Our approach integrates these data with onboard LiDAR scans to produce dense, accurate, georeferenced 3D point cloud maps. By combining an Iterative Closest Point (ICP) scan-to-scan and scan-to-map matching strategy, we achieve high local consistency without suffering from long-term drift. Thus, we eliminate the reliance on GNSS for the creation of georeferenced maps. The results demonstrate that LiDAR-only mapping can produce accurate georeferenced point cloud maps when augmented with existing map priors.
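The scan-to-scan half of the matching strategy can be sketched with a textbook point-to-point ICP loop: alternate nearest-neighbour matching with a closed-form rigid update. This is a generic 2-D toy, not the authors' implementation:

```python
import numpy as np

def best_rigid_transform(src, dst):
    # Closed-form least-squares rotation + translation (Kabsch) mapping src to dst
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=30):
    # Scan-to-scan ICP: repeat nearest-neighbour matching and rigid alignment
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = best_rigid_transform(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur

# Toy scans: the second is a rigidly moved copy of the first
rng = np.random.default_rng(1)
scan = rng.uniform(-10.0, 10.0, size=(40, 2))
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
moved = scan @ R_true.T + np.array([0.2, -0.1])
aligned = icp(scan, moved)            # should land nearly on top of `moved`
```

Scan-to-map matching follows the same loop with the accumulated (prior-anchored) map as the target cloud, which is what suppresses long-term drift.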


Using a Distance Sensor to Detect Deviations in a Planar Surface

Sifferman, Carter, Sun, William, Gupta, Mohit, Gleicher, Michael

arXiv.org Artificial Intelligence

We investigate methods for determining if a planar surface contains geometric deviations (e.g., protrusions, objects, divots, or cliffs) using only an instantaneous measurement from a miniature optical time-of-flight sensor. The key to our method is to utilize the entirety of information encoded in raw time-of-flight data captured by off-the-shelf distance sensors. We provide an analysis of the problem in which we identify the key ambiguity between geometry and surface photometrics. To overcome this challenging ambiguity, we fit a Gaussian mixture model to a small dataset of planar surface measurements. This model implicitly captures the expected geometry and distribution of photometrics of the planar surface and is used to identify measurements that are likely to contain deviations. We characterize our method on a variety of surfaces and planar deviations across a range of scenarios. We find that our method utilizing raw time-of-flight data outperforms baselines which use only derived distance estimates. We build an example application in which our method enables mobile robot obstacle and cliff avoidance over a wide field-of-view.
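The core detection idea (fit a mixture model to planar-surface measurements, then flag low-likelihood measurements as deviations) can be sketched as follows. The two-feature measurement vectors and all numeric values are invented, and the tiny EM routine is a minimal stand-in for whatever GMM implementation the authors use:

```python
import numpy as np

def component_logpdf(X, mu, var, w):
    # Per-component diagonal-Gaussian log densities plus mixture log weights
    return (-0.5 * (((X[:, None, :] - mu) ** 2) / var).sum(-1)
            - 0.5 * np.log(2 * np.pi * var).sum(-1) + np.log(w))

def fit_gmm(X, k=2, iters=100, seed=0):
    # Minimal diagonal-covariance Gaussian mixture fitted by EM (illustrative)
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)]          # init means from data
    var = np.tile(X.var(axis=0), (k, 1))
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        logp = component_logpdf(X, mu, var, w)       # (n, k)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)            # E-step responsibilities
        nk = r.sum(axis=0) + 1e-12
        mu = (r.T @ X) / nk[:, None]                 # M-step updates
        var = (r.T @ X**2) / nk[:, None] - mu**2 + 1e-6
        w = nk / n
    return mu, var, w

def log_likelihood(X, mu, var, w):
    logp = component_logpdf(X, mu, var, w)
    m = logp.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(logp - m).sum(axis=1, keepdims=True))).ravel()

# Hypothetical 2-D measurement features (distance estimate, return intensity)
# collected while viewing known planar surfaces; values are made up.
rng = np.random.default_rng(3)
planar = np.column_stack([rng.normal(1.0, 0.02, 500),
                          rng.normal(0.5, 0.05, 500)])
mu, var, w = fit_gmm(planar, k=2)
thresh = np.quantile(log_likelihood(planar, mu, var, w), 0.01)

# A measurement far from the planar distribution falls below the threshold
deviation = np.array([[1.3, 0.9]])
is_deviation = log_likelihood(deviation, mu, var, w)[0] < thresh
```

The 1% quantile threshold trades false positives against sensitivity; the paper's method applies the same likelihood test to raw time-of-flight histograms rather than two summary features.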


Automated Analysis, Reporting, and Archiving for Robotic Nondestructive Assay of Holdup Deposits

Jones, Heather, Maley, Siri, Yonekawa, Kenji, Mousaei, Mohammadreza, Yesso, J. David, Kohanbash, David, Whittaker, William

arXiv.org Artificial Intelligence

To decommission deactivated gaseous diffusion enrichment facilities, miles of contaminated pipe must be measured. The current method requires thousands of manual measurements, repeated manual data transcription, and months of manual analysis. The Pipe Crawling Activity Measurement System (PCAMS), developed by Carnegie Mellon University and in commissioning for use at the DOE Portsmouth Gaseous Diffusion Enrichment Facility, uses a robot to measure Uranium-235 from inside pipes and automatically log the data. Radiation measurements, as well as imagery, geometric modeling, and precise measurement positioning data are digitally transferred to the PCAMS server. On the server, data can be automatically processed in minutes and summarized for analyst review. Measurement reports are auto-generated with the push of a button. A database specially configured to hold heterogeneous data such as spectra, images, and robot trajectories serves as the archive. This paper outlines the features and design of the PCAMS Post-Processing Software, currently in commissioning for use at the Portsmouth Gaseous Diffusion Enrichment Facility. The analysis process, the analyst interface to the system, and the content of auto-generated reports are each described. Example pipe-interior geometric surface models, illustrations of how key report features apply in operational runs, and user feedback are discussed.


Online Learning of Uneven Terrain for Humanoid Bipedal Walking

Yi, Seung Joon (University of Pennsylvania) | Zhang, Byoung Tak (Seoul National University) | Lee, Daniel (University of Pennsylvania)

AAAI Conferences

The main advantage of legged locomotion over wheeled locomotion is that legs have the capability of climbing rougher terrain than wheeled or tracked vehicles. Unfortunately, this ideal is often not achieved in reality, especially for the current generation of bipedal humanoid robots. Many walking controller implementations for humanoid robots assume perfectly flat surfaces, and even a slight deviation in the floor can lead to serious instabilities in these controllers. In this work, we show how to use existing hardware on bipedal robots to address the sensing part of the problem using online machine learning techniques. By incorporating electronic compliance and foot pressure sensors, the swing foot is used to provide noisy estimates of the local gradient of the contact point, and the computed pose of the foot from joint encoders and the inertial measurement unit is used to rapidly learn an explicit model of the surface the robot is walking on.
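One simple way to realize such online surface learning is recursive least squares over a local plane model z ≈ ax + by + c, updated one footstep at a time without storing past contacts. This is a generic sketch with simulated contact data, not the paper's algorithm:

```python
import numpy as np

class OnlineTerrainModel:
    # Recursive least squares for a local plane z ~ a*x + b*y + c,
    # updated one noisy foot contact at a time (no stored history).
    def __init__(self):
        self.P = np.eye(3) * 100.0           # parameter covariance (uncertain prior)
        self.w = np.zeros(3)                 # estimated [a, b, c]

    def update(self, x, y, z):
        phi = np.array([x, y, 1.0])
        gain = self.P @ phi / (1.0 + phi @ self.P @ phi)    # RLS gain
        self.w = self.w + gain * (z - phi @ self.w)         # correct with residual
        self.P = self.P - np.outer(gain, phi @ self.P)      # shrink covariance

# Simulated footsteps on a tilted plane with noisy contact-height readings
rng = np.random.default_rng(0)
true_plane = np.array([0.10, -0.05, 0.02])   # 10% grade in x, -5% in y
model = OnlineTerrainModel()
for _ in range(200):
    x, y = rng.uniform(-1.0, 1.0, 2)
    z = true_plane @ np.array([x, y, 1.0]) + rng.normal(0.0, 0.01)
    model.update(x, y, z)
# model.w now approximates true_plane
```

The covariance matrix P doubles as an uncertainty estimate, which is useful on a robot for deciding when the terrain model is trustworthy enough to adapt the gait.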


Surface Reconstruction using Learned Shape Models

Solem, Jan E., Kahl, Fredrik

Neural Information Processing Systems

We consider the problem of geometrical surface reconstruction from one or several images using learned shape models. While humans can effortlessly retrieve 3D shape information, this inverse problem has turned out to be difficult to perform automatically. We introduce a framework based on level set surface reconstruction and shape models for achieving this goal. Through this merging, we obtain an efficient and robust method for reconstructing surfaces of an object category of interest. The shape model includes surface cues such as point, curve and silhouette features. Based on ideas from Active Shape Models, we show how both the geometry and the appearance of these features can be modelled consistently in a multi-view context. The complete surface is obtained by evolving a level set driven by a PDE, which tries to fit the surface to the inferred 3D features. In addition, an a priori 3D surface model is used to regularize the solution, in particular, where surface features are sparse. Experiments are demonstrated on a database of real face images.
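As a minimal illustration of level-set evolution (here with a constant speed field F = 1 on a toy 2-D grid, rather than the feature-driven speed the paper infers from images), the zero level set of a signed distance function expands at unit speed under the update φ ← φ − Δt · F · |∇φ|:

```python
import numpy as np

# Toy 2-D level-set evolution: phi_t + F * |grad phi| = 0 with constant F = 1,
# so the zero level set (a unit circle) expands at unit speed. In the paper the
# speed would instead be driven by inferred 3D point/curve/silhouette features.
n, dt, steps = 200, 0.005, 100
xs = np.linspace(-2.0, 2.0, n)
X, Y = np.meshgrid(xs, xs)
phi = np.sqrt(X**2 + Y**2) - 1.0          # signed distance to the unit circle
h = xs[1] - xs[0]
for _ in range(steps):
    g0, g1 = np.gradient(phi, h)          # central-difference gradient
    phi = phi - dt * 1.0 * np.sqrt(g0**2 + g1**2)   # explicit Euler step

# After t = steps * dt = 0.5 the interface radius should be about 1.5
r_est = np.sqrt((X**2 + Y**2)[phi < 0].max())
```

Replacing the constant 1.0 with a spatially varying speed derived from feature distances (plus a regularizing curvature term) gives the surface-fitting behaviour the abstract describes; robust solvers would also use upwind differencing and periodic reinitialization of φ.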

